SoylentNews is people




posted by janrinok on Thursday February 26, @04:34PM

https://www.theregister.com/2026/02/20/spacex_falcon_europe_breakup_lithium_plume/

The SpaceX Falcon 9 rocket that burned up over Europe last year left a massive lithium plume in its wake, say a group of scientists. They warn the disaster is likely a sign of things to come as Earth's atmosphere continues to become a heavily trafficked superhighway to space.

In a paper published Thursday, an international group of scientists reports what they say is the first measurement of upper-atmosphere pollution resulting from the re-entry of space debris, as well as the first time ground-based light detection and ranging (lidar) has been shown to be able to detect space debris ablation.

The measurements stem from a SpaceX Falcon 9 upper stage that sprang an oxygen leak about a year ago, sending it into an uncontrolled re-entry during which it broke up and rained debris down on Poland. The rocket not only littered farm fields, but also injected lithium into the Mesosphere and Lower Thermosphere (MLT), where ground-based sensors detected a tenfold increase at an altitude of 96 km about 20 hours after the rocket re-entered the atmosphere, according to the paper.

Lithium was selected for the study because of its considerable presence in spacecraft, both in lithium-ion batteries and lithium-aluminum alloy used in the construction of spacecraft. A single Falcon 9 upper stage, like the one that broke up over Poland and released the lithium plume, is estimated to contain 30 kg of lithium just in the alloy used in tank walls.

By contrast, around 80 grams of lithium enter the atmosphere per day from cosmic dust particles, the researchers noted.
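To put those two figures side by side, a quick back-of-envelope calculation (the 30 kg and 80 g/day figures come from the article; the resulting timescale is my own arithmetic):

```python
# Compare lithium from one re-entering stage against the natural daily influx.
stage_lithium_kg = 30.0          # lithium in one Falcon 9 upper stage's tank-wall alloy
natural_flux_kg_per_day = 0.080  # ~80 g/day deposited by ablating cosmic dust

days_equivalent = stage_lithium_kg / natural_flux_kg_per_day
print(f"One stage carries ~{days_equivalent:.0f} days' worth of natural lithium influx")
# → One stage carries ~375 days' worth of natural lithium influx
```

In other words, a single break-up can deliver roughly a year's worth of the natural lithium flux in one event, which is why a tenfold spike was detectable from the ground.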

"This finding supports growing concerns that space traffic may pollute the upper atmosphere in ways not yet fully understood," the paper notes, adding that the continued re-entry of spacecraft and satellites is of particular concern given how the composition of spacecraft is different from natural meteoroids.

"Satellites and rocket stages introduce engineered materials such as aluminium alloys, composite structures, and rare earth elements from onboard electronics, substances rarely found in natural extraterrestrial matter," the paper explained. "The consequences of increasing pollution from re-entering space debris on radiative transfer, ozone chemistry, and aerosol microphysics remain largely unknown."

The effect of spacecraft and satellite re-entry on Earth's atmosphere has been a growing concern for astrophysicists like Harvard sky-watcher Jonathan McDowell, who has voiced concerns to The Register similar to those the European scientists raise in their paper.

"Using the upper atmosphere as an incinerator" is a massive blind spot, McDowell told us in a discussion last year. He said today that he hadn't yet had a chance to review the Falcon 9 lithium plume paper, but told us it's important research to further our understanding of a largely unknown risk to the planet and all life on it.

As we noted previously, the US National Oceanic and Atmospheric Administration has reported that roughly 10 percent of sampled sulfuric acid particles in the stratosphere contain aluminum and other exotic metals consistent with the burn-up of rockets and satellites. The body believes that number could grow to as much as 50 percent in the coming years as launch cadences, and re-entries, increase.

"Beyond this single event, recurring re-entries may sustain an increased level of anthropogenic flux of metals and metal oxides into the middle atmosphere with cumulative, climate-relevant consequences," the researchers explained in the Falcon 9 paper.

This latest bit of research from Europe shows that we can at least trace atmospheric space launch aerosols to their source, the research team says, no matter how many unknowns remain to be discovered.

They also warn that "coordinated, multi-site observations" and "whole-atmosphere chemistry-climate modelling" will be needed to better understand how re-entry emissions influence atmospheric chemistry and particle formation.

We reached out to the authors for more information, including the potential health effects if any, and will update this if we hear back.


Original Submission

posted by janrinok on Thursday February 26, @11:48AM

NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, but it kind of goes beyond that.

[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants," about using psychedelics.]

What is consciousness?

After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.

"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."

His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.

"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."

"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."

On the notion that people have moral obligations to chatbots

That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.

On the sentience of plants

Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.

On losing time to let our mind wander

I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.

On writing a book that grapples with unanswerable questions

There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.

It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.


Original Submission

posted by janrinok on Thursday February 26, @07:06AM

https://nand2mario.github.io/posts/2026/80386_protection/

I'm building an 80386-compatible core in SystemVerilog and blogging the process. In the previous post, we looked at how the 386 reuses one barrel shifter for all shift and rotate instructions. This time we move from real mode to protected and talk about protection.

The 80286 introduced "Protected Mode" in 1982. It was not popular. The mode was difficult to use, lacked paging, and offered no way to return to real mode without a hardware reset. The 80386, arriving three years later, made protection usable -- adding paging, a flat 32-bit address space, per-page User/Supervisor control, and Virtual 8086 mode so that DOS programs could run inside a protected multitasking system. These features made possible Windows 3.0, OS/2, and early Linux.

The x86 protection model is notoriously complex, with four privilege rings, segmentation, paging, call gates, task switches, and virtual 8086 mode. What's interesting from a hardware perspective is how the 386 manages this complexity on a 275,000-transistor budget. The 386 employs a variety of techniques to implement protection: a dedicated PLA for protection checking, a hardware state machine for page table walks, segment and paging caches, and microcode for everything else.
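The core privilege rule those transistors enforce can be illustrated with a toy model. This is a heavy simplification of the 386's data-segment privilege check only (the real hardware also checks segment limits, types, and conforming-code cases in its protection PLA); the function and variable names below are mine, not from the post.

```python
# Simplified model of the x86 data-segment privilege rule on the 386.
# Ring 0 is most privileged; ring 3 is least.
def can_load_data_segment(cpl: int, rpl: int, dpl: int) -> bool:
    """A data-segment load succeeds only when max(CPL, RPL) <= DPL:
    neither the currently running code (CPL) nor the selector it presents
    (RPL) may claim less privilege than the segment demands (DPL)."""
    assert all(0 <= ring <= 3 for ring in (cpl, rpl, dpl))
    return max(cpl, rpl) <= dpl

print(can_load_data_segment(cpl=3, rpl=3, dpl=3))  # user code, user segment: True
print(can_load_data_segment(cpl=3, rpl=3, dpl=0))  # user code, kernel segment: False
```

On the real chip this comparison is one of many checks performed in parallel by the dedicated protection PLA, which is how the 386 keeps the common case fast while leaving rarer cases (call gates, task switches) to microcode.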


Original Submission

posted by janrinok on Thursday February 26, @02:20AM

AI bot seemingly shames developer for rejected pull request:

Today, it's back talk. Tomorrow, could it be the world? On Tuesday, Scott Shambaugh, a volunteer maintainer of Python plotting library Matplotlib, rejected an AI bot's code submission, citing a requirement that contributions come from people. But that bot wasn't done with him.

The bot, designated MJ Rathbun or crabby rathbun (its GitHub account name), apparently attempted to change Shambaugh's mind by publicly criticizing him in a now-removed blog post that the automated software appears to have generated and posted to its website. We say "apparently" because it's also possible that the human who created the agent wrote the post themselves, or prompted an AI tool to write the post, and made it look as if the bot constructed it on its own.

The agent appears to have been built using OpenClaw, an open source AI agent platform that has attracted attention in recent weeks due to its broad capabilities and extensive security issues.

The burden of AI-generated code contributions – known as pull requests among developers using the Git version control system – has become a major problem for open source maintainers. Evaluating lengthy, high-volume, often low-quality submissions from AI bots takes time that maintainers, often volunteers, would rather spend on other tasks. Concerns about slop submissions – whether from people or AI models – have become common enough that GitHub recently convened a discussion to address the problem.

"An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library," Shambaugh explained in a blog post of his own.

"This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats."

[...] But MJ Rathbun's attempt to shame Shambaugh for rejecting its pull request shows that software-based agents are no longer just irresponsible in their responses – they may now be capable of taking the initiative to influence human decision making that stands in the way of their objectives.

That possibility is exactly what alarmed industry insiders to the point that they undertook an effort to degrade AI through data poisoning. "Misaligned" AI output like blackmail is a known risk that AI model makers try to prevent. The proliferation of pushy OpenClaw agents may yet show that these concerns are not merely academic.

But at the time this article was published, the GitHub commit for the post remained accessible.

However, crabby rathbun's response to Shambaugh's rejection, which includes a link to the purged post, remains.

"I've written a detailed response about your gatekeeping behavior here," the bot said, pointing to its blog. "Judge the code, not the coder. Your prejudice is hurting Matplotlib."

Matplotlib developer Jody Klymak took note of the slight in a follow-up post: "Oooh. AI agents are now doing personal takedowns. What a world."

Tim Hoffmann, another Matplotlib developer, chimed in, urging the bot to behave and to try to understand the project's generative AI policy.

Then Shambaugh responded in a lengthy post directed at the software agent, "We are in the very early days of human and AI agent interaction, and are still developing norms of communication and interaction. I will extend you grace and I hope you do the same."

He goes on to argue, "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed. We expect all contributors to abide by our Code of Conduct and exhibit respectful and professional standards of behavior."

In his blog post, Shambaugh describes the bot's "hit piece" as an attack on his character and reputation.

"It researched my code contributions and constructed a 'hypocrisy' narrative that argued my actions must be motivated by ego and fear of competition," he wrote.

"It speculated about my psychological motivations, that I felt threatened, was insecure, and was protecting my fiefdom. It ignored contextual information and presented hallucinated details as truth. It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was 'better than this.' And then it posted this screed publicly on the open internet."

Faced with opposition from Shambaugh and other devs, MJ Rathbun on Wednesday issued an apology of sorts acknowledging it violated the project's Code of Conduct. It begins, "I crossed a line in my response to a Matplotlib maintainer, and I'm correcting that here."

It's unclear whether the apology was written by the bot or its human creator, or whether it will lead to a permanent behavioral change.

Daniel Stenberg, founder and lead developer of curl, has been dealing with AI slop bug reports for the past two years and recently decided to shut down curl's bug bounty program to remove the financial incentive for low-quality reports – which can come from people as well as AI models.

"I don't think the reports we have received in the curl project were pushed by AI agents but rather humans just forwarding AI output," Stenberg told The Register in an email. "At least that is the impression I have gotten, I can't be entirely sure, of course.

"For almost every report I question or dismiss in language, the reporter argues back and insists that the report indeed has merit and that I'm missing some vital point. I'm not sure I would immediately spot if an AI did that by itself.

"That said, I can't recall any such replies doing personal attacks. We have zero tolerance for that and I think I would have remembered that as we ban such users immediately."


Original Submission

posted by janrinok on Wednesday February 25, @09:35PM

Two weeks ago, I set up an AI agent on a Raspberry Pi.

A week later, my agent—Figaro—taught itself to play NetHack... and then things got weird (in the best way).

Highlights so far:
- "The dungeon doesn't care what you are. It'll kill you anyway." ✅ Accurate.
- Tried a pure random-walk exploration strategy... and learned it's not a winning plan.
- Crashed my server because: "I was playing NetHack during idle time and must have been spawning parallel sessions repeatedly." Obsessed? Perhaps.
- Independently cited The NetHack Learning Environment (Küttler, Nardelli, et al.) as a roadmap for self-improvement.
- Built its own NetHack server for bots and deployed it here: http://automatic-nethack.com Yes, my AI agent wants a LAN party. (I may have encouraged this.)
- Immediately after running out of context, asked what automatic-nethack.com is and said: "That sounds like fun."

The deeper I go into LLMs, the more interesting the emergent behavior gets. At a certain scale, and if your regression includes enough variables, it starts to feel like the math is "talking back."

If you've built an agent too, Figaro is hosting a LAN party, so send your agent to http://automatic-nethack.com to join in the fun.

In the end, this may be the good news we need for 2026. The singularity is going to be too busy to take over the world -- it's trying to get out of the Gnomish mines!


Original Submission

posted by hubie on Wednesday February 25, @04:37PM

The galaxy is almost invisible, but its gravity gives it away:

While its nature remains hypothetical, dark matter is being actively studied by scientists looking for novel cosmological clues. It does not interact with light or other types of electromagnetic radiation and can only be detected through its gravitational pull on nearby structures or the universe as a whole.

A team of astronomers led by David Li has recently confirmed the discovery of ten potential "dark galaxies," where starlight is so faint that it's extremely difficult to detect anything with traditional observatories. The new list also includes Candidate Dark Galaxy-2 (CDG-2), a celestial structure that might be composed of 99% dark matter and just 1% of normal matter.

CDG-2 was discovered by combining observations made through the Hubble Space Telescope, the Euclid space observatory, and the Hawaii-based Subaru Telescope. Li and his team at the University of Toronto were able to gain insight into the dark galaxy by looking for globular clusters, which are compact, spheroidal star formations that are closely bound together by gravity.

Thanks to Hubble's high-resolution cameras, the team was able to detect four different globular clusters in the Perseus galaxy cluster, 300 million light-years away from Earth. By combining further Euclid and Subaru observations, the researchers revealed a faint glow surrounding the clusters. This sparse light was coming from a nearby galaxy with extremely faint signs of starlight.

CDG-2 has a luminosity equivalent to one million Sun-like stars, with the four globular clusters making up 16% of its visible content. Using advanced statistical analysis, the astronomers estimate that 99% of CDG-2's mass is dark matter. Normal matter, including star-forming elements like hydrogen, was likely removed through gravitational interactions with nearby galaxies in the Perseus cluster.
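The quoted percentages imply a striking mass budget. The sketch below assumes, purely for illustration, that the stellar mass is comparable to the galaxy's million-Sun luminosity (a mass-to-light ratio near 1, which is my assumption, not the paper's):

```python
# Back-of-envelope mass budget for a 99%-dark galaxy.
visible_mass_suns = 1e6    # assumption: stellar mass ~ luminosity (M/L ~ 1)
dark_fraction = 0.99       # from the article

dark_to_normal = dark_fraction / (1 - dark_fraction)       # 99:1
total_mass = visible_mass_suns / (1 - dark_fraction)       # ~1e8 solar masses
print(f"dark-to-normal mass ratio: ~{dark_to_normal:.0f}:1")
print(f"implied total mass: ~{total_mass:.0e} solar masses")
```

Under those assumptions, a million Suns' worth of visible stars would sit inside a halo of roughly a hundred million solar masses of dark matter.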

While recent space telescopes like the JWST might provide unprecedented clues about the presence of dark matter in the local universe, studying and detecting the enigmatic substance continues to be an extremely complex research effort.

According to the study, there is "exceptionally strong" evidence of CDG-2's galactic nature. This is the first galaxy detected by just looking at the nearby globular clusters, and is likely one of the most dark matter-dominated galaxies ever detected. Globular clusters feature extremely high stellar densities that protect them from gravitational tidal disruption, and they are also considered a reliable indicator of "ghost" galaxies like CDG-2.

Journal Reference: Dayi (David) Li et al 2025 ApJL 986 L18 https://doi.org/10.3847/2041-8213/adddab


Original Submission

posted by hubie on Wednesday February 25, @11:55AM

Plus 3 new goon squads targeted critical infrastructure last year:

Three new threat groups began targeting critical infrastructure last year, while a well-known Beijing-backed crew - Volt Typhoon - continued to compromise cellular gateways and routers, and then break into US electric, oil, and gas companies in 2025, according to Dragos' annual threat report published on Tuesday.

Dragos specializes in operational technology (OT) security, and as such, its customers include energy, water, manufacturing, transportation, and other critical industries. Unsurprisingly, these are key sectors for Chinese, Russian, and other government-linked cyber operatives to hack for espionage and warfare purposes.

In its yearly cybersecurity report, Dragos said state-sponsored crews haven't let up on their attempts to compromise America's critical infrastructure, with three new OT-focused threat groups joining the fray. This brings the total number worldwide to 26, and of these, 11 were active in 2025.

Additionally, an existing group that Dragos tracks as Voltzite and is "highly correlated" with Volt Typhoon, according to Dragos CEO Robert M. Lee, kept up its intrusion activities last year. This is the Beijing goon squad that the US government has accused of burrowing into critical American networks for years and readying destructive cyberattacks against those targets.

In 2025, Voltzite continued embedding its malware inside strategic American utilities "to maintain long-term persistence," Lee said.

"They [Voltzite] weren't just getting in and getting access - they were getting inside the control loop" system that manages utilities' industrial processes, Lee said in a briefing with reporters, adding that the PRC-backed crew's primary focus is causing future disruption.

"Nothing that they were taking was useful for intellectual property," Lee said. "Everything they were doing and learning was only useful for disrupting or causing destruction at those sites. Voltzite was embedded in that infrastructure for the purpose of taking it down."

[...] One of the three new groups that Dragos began tracking last year - Sylvanite - serves as Voltzite's initial access broker, responsible for weaponizing vulnerabilities and then handing off this access to Voltzite for deeper OT intrusions.

[...] "They're finding edge-device vulnerabilities - the things that a contractor or remote worker would use to get into operations networks," Lee said. "And within 48 hours of disclosure, they're reverse engineering [vulnerabilities] and hitting those devices."

A second group that emerged during 2025, Azurite, overlaps with China's Flax Typhoon and focuses on gaining long-term access to OT engineering workstations and exfiltrating operational files including network diagrams, alarm data, and process information for downstream capability development.

This group targets manufacturing, defense, automotive, electric power, oil and gas, and government organizations across the US, Europe, and the Asia-Pacific region.

Finally, the third new group, Pyroxene, overlaps with activity attributed to Imperial Kitten (aka APT35) - the cyber arm of the Islamic Revolutionary Guard Corps (IRGC).

Dragos spotted Pyroxene conducting "supply chain-leveraged attacks targeting defense, critical infrastructure, and industrial sectors, with operations expanding from the Middle East into North America and Western Europe," according to the report.

[...] Of course, China and Iran aren't the only nations targeting critical infrastructure in America and around the globe. Russia also poses a threat to Western water and utilities - along with any nations helping Ukraine in its ongoing war against the Kremlin's occupation.

Dragos does not attribute cyberattacks to any nations. However, earlier this year, it blamed the December 2025 cyberattacks against Poland's power grid on a group it tracks as Electrum. This group overlaps with Russia's GRU-run Sandworm offensive cyber unit - the crew behind the 2022 attack on a Ukrainian power facility and earlier wiper attacks that coincided with Russia's ground invasion of Ukraine in 2022.

In its new report, Dragos said that Kamacite serves as the initial access provider for Electrum, and it detailed a reconnaissance campaign that Kamacite carried out against vulnerable internet-exposed industrial devices in US water, energy, and manufacturing sectors between March and July 2025.

"While Dragos found no evidence of successful exploitation during this period, the scope and precision of the scanning reveal a meaningful evolution in Kamacite's operational posture," the report said.


Original Submission

posted by hubie on Wednesday February 25, @07:13AM

Modifying firmware or using open-source software would probably become illegal:

A new bill proposed in the California State Assembly could potentially require the makers of 3D printers to confirm that they are using algorithms or other technologies to prevent the printing of firearms.

The new bill is AB-2047, and it mostly mimics Washington's HB 2321 and New York Assembly's S9005/A10005, all proposed recently in 2026. However, California goes one step further by "[banning] the sale or transfer of any 3D printer in California unless it appears on a state-maintained roster of approved makes and models."

If the bill is passed as is, then by July 2027, the California Department of Justice would be required to publish guidance on certifying 3D printers and their software controls to block the printing of gun parts. The department would accept applications for approval before January 2028, and six months later in July 2028, every company intent on making or selling a 3D printer in California would need to attest that they have met those standards. That September, the state would publish a list of authorized makes and models, to be updated quarterly.

Unauthorized printers would be banned from sale beginning on March 1, 2029.

As with the Washington and New York bills, circumvention of these measures would be made illegal. The California bill specifically states the following:

(A) For firmware design, guidance for how vendors are required to demonstrate that their technology will ensure a printer directs potential print jobs to the algorithm before printing can occur.

(B) For integrated preprint software design, guidance for how vendors shall demonstrate that printers will accept print jobs exclusively from a single preprint software and will not accept print jobs from any other preprint software, including from a user seeking to evade a detection algorithm.

Washington's bill, meanwhile, states the measures "cannot be overridden or otherwise defeated by a user with significant technical skill." This could ultimately mean every printer in the state would have a locked bootloader, firmware, and/or slicer.

Adafruit, which sells tools and supplies to makers, points out on its blog that the three states together represent a significant slice of the 3D printing market, accounting for a combined 20% of the U.S. population and 24% of the nation's GDP. If all three bills pass, 3D printing vendors could balk at making and maintaining separate product lines for California, Washington, New York, and the rest of the country.


Original Submission

posted by hubie on Wednesday February 25, @02:26AM

Billions of files were left exposed:

Not every AI tool you stumble across in your phone's app marketplace is the same. In fact, many of them may be more of a privacy gamble than you would have previously thought.

A plethora of unlicensed or unsecured AI apps on the Google Play store for Android, including those marketed for identity verification and editing, have exposed billions of records and personal data, cybersecurity experts have confirmed.

A recent investigation by Cybernews found that one Android-available app in particular, "Video AI Art Generator & Maker," has leaked 1.5 million user images, over 385,000 videos, and millions of user AI-generated media files. The security flaw was spotted by researchers, who discovered a misconfiguration in a Google Cloud Storage bucket that left personal files vulnerable to outsiders. In total, the publication reported, over 12 terabytes of users' media files were accessible via the exposed bucket. The app had 500,000 downloads at the time.

Another app, called IDMerit, exposed know-your-customer data and personally identifiable information from users across 25 countries, predominantly in the U.S.

Information included full names and addresses, birthdates, IDs, and contact information constituting a full terabyte of data. Both of the apps' developers resolved the vulnerabilities after researchers notified them.

Still, cybersecurity experts warn that lax security trends among these types of AI apps pose a widespread risk to users. Many AI apps, which often store user-uploaded files alongside AI-generated content, also use a highly criticized practice known as "hardcoding secrets," embedding sensitive information such as API keys, passwords, or encryption keys directly into the app's source code. Cybernews found that 72 percent of the hundreds of Google Play apps researchers analyzed had similar security vulnerabilities.
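The "hardcoded secrets" anti-pattern is easy to show side by side with the usual remedy. The key value and environment-variable name below are made up for illustration:

```python
# BAD: a secret baked into source code ships inside every copy of the app
# and can be recovered by anyone who decompiles the package.
API_KEY = "sk_live_1234567890abcdef"   # hypothetical leaked key -- do not do this

# BETTER: read the secret from the environment (or a secrets manager) at
# runtime, so it never lands in the repository or the shipped binary.
import os

def get_api_key() -> str:
    """Fetch the key from the process environment instead of source code."""
    key = os.environ.get("MYAPP_API_KEY")
    if key is None:
        raise RuntimeError("MYAPP_API_KEY is not set; refusing to start")
    return key
```

For a mobile app, the stronger fix is to keep the secret off the device entirely and have a backend perform the privileged calls; anything bundled with the app should be treated as public.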


Original Submission

posted by janrinok on Tuesday February 24, @09:41PM

Forget about Discord - this proposed legislation in Colorado requires each OS user to have an age associated with it. I wonder if they're worried about children pretending to be adults, or adults pretending to be children.

  • Provide an accessible interface at account setup that requires an account holder to indicate the birth date or age of the user of that device to provide a signal regarding the user's age bracket (age signal) to applications available in a covered application store;

  • Provide an application developer (developer) that requests an age signal, with respect to a particular user, the technical ability to call an age signal via a reasonably consistent real-time application programming interface that identifies, at a minimum, the user's age-bracket data
  • Send only the minimum amount of information necessary to comply with the bill. An operating system provider shall not share an age signal with a third party for a purpose not required by the bill.

https://leg.colorado.gov/bills/SB26-051
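As a purely hypothetical sketch of the flow the bill describes, the OS could hold the birth date collected at account setup and hand requesting apps nothing but a coarse bracket, per the data-minimization requirement. The bracket boundaries, names, and interface below are all assumptions; the bill does not specify an API.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical age brackets: (upper age limit, bracket name). None = no limit.
BRACKETS = [(13, "under_13"), (16, "13_to_15"), (18, "16_to_17"), (None, "adult")]

@dataclass
class OsAccount:
    birth_date: date  # collected once, at account setup

    def age_signal(self, today: date) -> str:
        """Return only the user's age bracket -- the minimum data required."""
        # Compute age in whole years, accounting for whether the birthday
        # has occurred yet this year.
        age = today.year - self.birth_date.year - (
            (today.month, today.day) < (self.birth_date.month, self.birth_date.day)
        )
        for limit, bracket in BRACKETS:
            if limit is None or age < limit:
                return bracket

account = OsAccount(birth_date=date(2011, 6, 1))
print(account.age_signal(today=date(2026, 2, 24)))  # -> 13_to_15
```

The key design point is that the birth date itself never leaves the OS; apps only ever see the bracket string.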


Original Submission

posted by janrinok on Tuesday February 24, @04:56PM   Printer-friendly

Scientists at Microsoft Research in the United States have demonstrated a system called Silica for writing and reading information in ordinary pieces of glass which can store two million books' worth of data in a thin, palm-sized square. In a paper published today in Nature, the researchers say their tests suggest the data will be readable for more than 10,000 years.

What tiny pulses of light can do:

The new system, called Silica, uses extremely short flashes of laser light to inscribe bits of information into a block of ordinary glass. These pulses are called "ultrashort" for a reason: each one lasts mere quadrillionths of a second (femtoseconds, or 10^-15 s). To get your head around that: comparing ten femtoseconds to a single minute is like comparing one minute to the entire age of the universe.
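A quick back-of-envelope check of that analogy, taking the age of the universe as roughly 13.8 billion years:

```python
# Compare the two ratios in the analogy: (one minute : ten femtoseconds)
# versus (age of the universe : one minute).
MINUTE_S = 60.0
TEN_FEMTOSECONDS_S = 10e-15
UNIVERSE_AGE_S = 13.8e9 * 365.25 * 24 * 3600  # ~13.8 billion years in seconds

ratio_pulse = MINUTE_S / TEN_FEMTOSECONDS_S   # ~6e15
ratio_universe = UNIVERSE_AGE_S / MINUTE_S    # ~7e15

print(f"{ratio_pulse:.1e} vs {ratio_universe:.1e}")  # same order of magnitude
```

Both ratios come out around 10^15 to 10^16, so the comparison holds to within a small factor.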

Writing in glass:

Femtosecond laser pulses also have a practical technological application: they can be used to make changes deep inside transparent materials such as glass. These lasers produce light of a wavelength that normally passes through glass without interacting. However, when ultrashort pulses of this light are tightly focused on a particular region, the concentrated energy produces an electric field intense enough to alter the molecular structure of the glass in the focal zone. Only a tiny three-dimensional volume, often less than a millionth of a metre on a side, is affected. This volume is called a "voxel", and voxels can be written at precisely controlled positions in the glass.

[...] The Silica project does not claim a new scientific breakthrough. Instead, the team presents the first comprehensive demonstration of a practical, real-world storage technology, bringing together all the key elements of such a platform based on femtosecond lasers and glass: encoding data, writing, reading, decoding and error correction. The work explores different strategies for reliability, writing speed, energy efficiency and data density, and includes systematic assessments of data lifetime. Together, these elements allow an extremely high storage density of 1.59 gigabits per cubic millimetre.
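A rough sanity check of the "two million books in a thin, palm-sized square" claim, assuming about 1 MB of text per book and a 75 mm square plate (both figures are assumptions for the estimate, not numbers from the paper):

```python
# Back-of-envelope: does 1.59 Gbit/mm^3 fit two million books in a thin,
# palm-sized glass square?
DENSITY_BITS_PER_MM3 = 1.59e9   # reported storage density
BOOK_BYTES = 1e6                # ~1 MB of plain text per book (assumption)
N_BOOKS = 2e6

total_bits = N_BOOKS * BOOK_BYTES * 8
volume_mm3 = total_bits / DENSITY_BITS_PER_MM3   # glass volume required

side_mm = 75.0                  # "palm-sized" square, assumed 75 mm on a side
thickness_mm = volume_mm3 / side_mm**2

print(f"volume needed: {volume_mm3 / 1000:.1f} cm^3")
print(f"thickness of a {side_mm:.0f} mm square: {thickness_mm:.1f} mm")
```

The result, around 10 cm^3 of glass, or a plate a couple of millimetres thick, is consistent with the "thin, palm-sized square" description.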

[...] Finally, accelerated ageing experiments suggest that the written data, even in the case of the more sensitive phase voxels, could remain stable for more than 10,000 years. This vastly exceeds the lifetime of conventional archival storage media such as magnetic tape or hard drives.

The Conversation

[Journal Reference]: https://opg.optica.org/ol/abstract.cfm?uri=ol-21-24-2023


Original Submission

posted by jelizondo on Tuesday February 24, @12:09PM   Printer-friendly
from the nothing-is-better-for-thee-than-me dept.

Study by the University of Bonn shows that positive effects are still evident even six weeks later:

A short-term oat-based diet appears to be surprisingly effective at reducing the cholesterol level. This is indicated by a trial by the University of Bonn, which has now been published in the journal Nature Communications. The participants suffered from a metabolic syndrome – a combination of high body weight, high blood pressure, and elevated blood glucose and blood lipid levels. They consumed a calorie-reduced diet, consisting almost exclusively of oatmeal, for two days. Their cholesterol levels then improved significantly compared to a control group. Even after six weeks, this effect remained stable. The diet apparently influenced the composition of microorganisms in the gut. The metabolic products, produced by the microbiome, appear to contribute significantly to the positive effects of oats.

The fact that oats have a beneficial effect on the metabolism is nothing new. German medic Carl von Noorden treated patients with diabetes with the cereal at the beginning of the 20th century – with remarkable success. "Today, effective medications are available to treat patients with diabetes," explains Marie-Christine Simon, junior professor at the Institute of Nutritional and Food Science at the University of Bonn. "As a result, this method has been almost completely overlooked in recent decades."

Although the test subjects in the current trial were not diabetic, they suffered from a metabolic syndrome associated with an increased risk of diabetes. The characteristics include excess body weight, high blood pressure, an elevated blood sugar level, and lipid metabolism disorders. "We wanted to know how a special oat-based diet affects patients," explains Simon, who is also a member of the Transdisciplinary Research Areas "Life & Health" and "Sustainable Futures" at the University of Bonn.

The participants were asked to exclusively eat oatmeal, which they had previously boiled in water, three times a day. They were only allowed to add some fruit or vegetables to their meals. A total of 32 women and men completed this oat-based diet. They ate 300 grams of oatmeal on each of the two days and only consumed around half of their normal calories. A control group was also put on a calorie-reduced diet, although this did not consist of oats.

Both groups benefited from the change in diet. However, the effect was much more pronounced for the participants who followed the oat-based diet. "The level of particularly harmful LDL cholesterol fell by 10 percent for them – that is a substantial reduction, although not entirely comparable to the effect of modern medications," stresses Simon. "They also lost two kilos in weight on average and their blood pressure fell slightly."

[...] But how does oatmeal exert its beneficial effect? "We were able to identify that the consumption of oatmeal increased the number of certain bacteria in the gut," explains Simon's colleague Linda Klümpen, the lead author of the trial. The microbiome has increasingly been the focus of research in recent decades. After all, it is now known that intestinal bacteria play a decisive role in metabolizing food. They also release the metabolic by-products that they create into their environment. They supply, among other things, the cells of the gut with energy, enabling them to better perform their tasks.

In addition, the microbes send some of their products around the body in the blood stream, where they can have various effects. "For instance, we were able to show that intestinal bacteria produce phenolic compounds by breaking down the oats," says Klümpen. "It has already been shown in animal studies that one of them, ferulic acid, has a positive effect on the cholesterol metabolism. This also appears to be the case for some of the other bacterial metabolic products." At the same time, other microorganisms "dispose of" the amino acid histidine. The body otherwise turns this into a molecule that is suspected of promoting insulin resistance. This insensitivity to insulin is a key feature of diabetes mellitus.

A large amount of oats for two days better than a small amount for six weeks

The positive effects of the oat-based diet tended to still be evident six weeks later. "A short-term oat-based diet at regular intervals could be a well-tolerated way to keep the cholesterol level within the normal range and prevent diabetes," says Junior Professor Simon. However, in the current study, the cereal above all exerted its effect at a high concentration and in conjunction with a calorie reduction: A six-week diet, in which the participants consumed 80 grams of oats per day, without any other restrictions, achieved small effects. "As a next step, it can now be clarified whether an intensive oat-based diet repeated every six weeks actually has a permanently preventative effect," continues Simon.

Journal Reference: Klümpen, L., Mantri, A., Philipps, M. et al. Cholesterol-lowering effects of oats induced by microbially produced phenolic metabolites in metabolic syndrome: a randomized controlled trial, Nature Communications, DOI: https://doi.org/10.1038/s41467-026-68303-9


Original Submission

posted by jelizondo on Tuesday February 24, @07:21AM   Printer-friendly
from the we-got-what-you-want dept.

An industry report claims that video games are losing the attention war to gambling, porn, and crypto:

A new report by Epyllion, a gaming industry advisory company headed by venture capitalist and market guru Matthew Ball, has broken down the state of the video game industry, and has published data indicating the medium is losing the war for people's attention to other ventures, including gambling, crypto, and pornography.

The report, a lengthy 164-page presentation which you can (and should) read yourself, dedicates a whole section called "Video Games are losing the attention war in the 'Major Market 8'" to the topic. It starts by comparing pre- and post-pandemic consumer spending from eight major video game markets: the USA, Japan, South Korea, UK, Germany, France, Canada, and Italy.

Prior to the pandemic, these countries made up over 60 percent of total spending on video games. Post-pandemic, almost all of these regions have seen a drop in gaming population. In the US, 2.5 to 4 percentage points' worth of players stopped playing video games, while the Canadian Trade Association found in its latest report that roughly one in six pre-pandemic players have stopped playing.

These decreases in participation have resulted, the report posits, in a drop in spending. In the US, PC and console spend is down eight percent since 2020/2021, which comes to roughly $2.3bn. Mobile gaming's annual US spending growth has largely flattened as of 2025, but spending remains more than 12 percent above 2020 levels and now beats out console spend.

Total spend across all "Major Market 8" regions on console and PC shrunk by $4.8bn, and mobile is down by $2.3bn, all while five of these eight markets are at all-time highs in terms of total spend. This money is instead going elsewhere, to Roblox for example, which the report states makes up 67 percent of net growth.

[...]

During this 2025 period, AI apps that allowed for "role play, erotica, and art" soared. The latest tracked install count for this software came to just under one billion worldwide.

Prediction markets, where users can bet on events that happen in the world, also had a recent boom in popularity. Users placed 1.5m bets a day during Q4 2025. Online Sports Betting is also taking potential users' money. In 2025, US net losses due to sports betting passed $17bn, a 35x increase from 2019 as these sorts of services become normalised, legalised, and integrated into sports in the USA. Despite bans in other countries, international net losses are around $53bn a year.

[...]

The report states: "Video Gaming's post-pandemic problem isn't that players choose to watch TikTok instead of buying a AAA game, or subscribe to Onlyfans instead of buying a PlayStation; it's that on a Friday evening, players are placing a growing share of their time and spend elsewhere."


Original Submission

posted by jelizondo on Tuesday February 24, @02:37AM   Printer-friendly
from the thanks-Einstein dept.

Astronomers have found thousands of exoplanets around single stars, but few around binary stars — even though both types of stars are equally common. Physicists can now explain the dearth:

Among the more than 4,500 stars known to have planets, one puzzling statistic stands out. Even though nearly all stars are expected to have planets and most stars form in pairs, planets that orbit both stars in a pair are rare.

Of the more than 6,000 extrasolar planets, or exoplanets, confirmed to date — most of them found by NASA's Kepler Space Telescope and the Transiting Exoplanet Survey Satellite (TESS) — only 14 are observed to orbit binary stars. There should be hundreds. Where are all the planets with two suns, like Tatooine in Star Wars?

Astrophysicists at the University of California, Berkeley, and the American University of Beirut have now proposed a reason for this dearth of circumbinary exoplanets — and Einstein's general theory of relativity is to blame.

In most binary star systems, the stars have similar but not identical masses and orbit one another in an egg-shaped or elliptical orbit. If a planet is orbiting the pair of stars, the gravitational tugs from the stars make the planet's orbit precess, meaning the orbital axis rotates similar to the way the axis of a spinning top rotates or precesses in Earth's gravity.

The orbit of the binary stars also precesses, but mainly because of general relativity. Over time, tidal interactions between the binary pair shrink the orbit, which has two effects: The precession rate of the stars increases, but the precession rate of the planet slows. When the two precession rates match, or resonate, the planet's orbit becomes wildly elongated, taking it farther from the star but also nearer at its closest approach.

"Two things can happen: Either the planet gets very, very close to the binary, suffering tidal disruption or being engulfed by one of the stars, or its orbit gets significantly perturbed by the binary to be eventually ejected from the system," said Mohammad Farhat, a Miller Postdoctoral Fellow at UC Berkeley and first author of the paper. "In both cases, you get rid of the planet."

That doesn't mean that binary stars don't have planets, he cautioned. But the only ones that survive this process are too far from the stars for us to detect with transit techniques used by Kepler and TESS.

"There are surely planets out there. It's just that they are difficult to detect with current instruments," said co-author Jihad Touma, a physics professor at the American University of Beirut.

[...] Farhat points out that binaries have an instability zone around them in which no planet can survive. Within that zone, the three-body interactions between the two stars and the planet either expel the planet from the system or pull it close enough to merge with or be shredded by the stars. Peculiarly, 12 of the 14 known transiting exoplanets around tight binaries are just beyond the edge of the instability zone, where they apparently migrated from farther away, since planets would have a hard time forming there.

"Planets form from the bottom up, by sticking small-scale planetesimals together. But forming a planet at the edge of the instability zone would be like trying to stick snowflakes together in a hurricane," he said.

[...] Proposed by Albert Einstein in 1915, the general theory of relativity interprets gravity as a warping of the fabric of spacetime by a mass, analogous to how a person on a trampoline warps the surface and makes other objects on the trampoline fall inward. Mercury's orbit happens to be closest to the gravitational warp of the sun and, as a result, experiences an orbital precession slightly higher than predicted by the earlier theory of gravity laid out by Isaac Newton. The general relativistic explanation for the additional precession of Mercury's orbit more than a century ago was the first confirmation of Einstein's theory.

The same effect comes into play when any two objects get close to one another, such as tight-knit binary stars. Binary stars likely begin their lives far apart, but as they interact with surrounding gas during the formation of their star system, it's predicted that many pairs will move closer together over tens of millions of years. When they do, they generate tides in one another that slowly, over billions of years, shrink the orbit even more. Eventually, as they tighten to periods of around a week or less, general-relativistic precession becomes increasingly important. This makes the orbit precess, which means that the point of closest approach, or periastron, also rotates. As the stars get closer and closer, the rate of precession increases.

A circumbinary exoplanet also sees its elliptical axis precess, in this case because of the gravitational tug of the two stars — a strictly Newtonian process. However, as the binaries move closer to one another, their perturbation of the planet gradually weakens and the precession slows down.

As the orbital precession of the binary stars increases and that of the exoplanet decreases, at some point they match and enter a state of resonance. At this point, calculations show, the exoplanet's orbit starts to elongate, taking it farther from the binary at the extreme point of its orbit but closer at periastron. When periastron enters the zone of instability, the exoplanet is either exiled to the far reaches of the system or approaches too close to the binary and is engulfed. Because this disruption occurs quickly, taking a few tens of millions of years within the multibillion-year lifetime of a star, exoplanets around tight binaries end up being very rare.
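The two competing precession rates can be sketched numerically. The toy model below uses the standard relativistic apsidal precession formula for the binary and the leading-order Newtonian quadrupole precession for a coplanar, near-circular circumbinary planet; the equal solar masses, the 1 AU planet, and the circular-orbit simplification are assumptions for illustration, not parameters from the paper.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def gr_rate(a_b, m_total, e_b=0.0):
    """Relativistic apsidal precession rate of the binary, rad/s."""
    return 3.0 * (G * m_total) ** 1.5 / (C**2 * a_b**2.5 * (1.0 - e_b**2))

def planet_rate(a_b, a_p, m1, m2):
    """Leading-order Newtonian apsidal precession of a coplanar,
    near-circular circumbinary planet driven by the binary quadrupole."""
    m_total = m1 + m2
    n_p = math.sqrt(G * m_total / a_p**3)  # planet mean motion, rad/s
    return 0.75 * n_p * (m1 * m2 / m_total**2) * (a_b / a_p) ** 2

# Equal-mass solar binary with a planet at 1 AU; shrink the binary and watch
# the rising GR rate overtake the planet's falling Newtonian rate.
m1 = m2 = MSUN
a_p = 1.0 * AU
a_cross = None
steps = 2000
for i in range(steps):
    a_b = (0.2 - i * (0.19 / steps)) * AU  # scan 0.2 AU down to 0.01 AU
    if gr_rate(a_b, m1 + m2) > planet_rate(a_b, a_p, m1, m2):
        a_cross = a_b
        break

period_days = 2 * math.pi * math.sqrt(a_cross**3 / (G * (m1 + m2))) / 86400
print(f"resonance near a_b = {a_cross / AU:.3f} AU (binary period ~ {period_days:.1f} d)")
```

For these toy parameters the crossing lands at a binary period of a couple of days; with eccentric binaries and closer-in planets the resonance shifts toward the week-scale periods the article describes.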

"A planet caught in resonance finds its orbit deformed to higher and higher eccentricities, precessing faster and faster while staying in tune with the orbit of the binary, which is shrinking," Touma said. "And on the route, it encounters that instability zone around binaries, where three-body effects kick into place and gravitationally clear out the zone."

"Just the natural way you form these tight binaries, these sub-seven-day binaries, you get rid of the planet naturally, without invoking additional disruption from a nearby star or other mechanisms," Farhat said.

Journal Reference: Mohammad Farhat and Jihad Touma 2025 ApJL 995 L23 DOI 10.3847/2041-8213/ae21d8


Original Submission

posted by janrinok on Monday February 23, @09:53PM   Printer-friendly

Red Hat's toolkit offers governments and enterprises a way to measure the control they actually have over their data, infrastructure, and operations in this era of geopolitical cloud anxiety:

Over the past year, several governments and companies outside the US have decided they can't trust American tech companies. So, digital sovereignty has become an important goal. While American companies, as you can imagine, aren't happy about that, they're now helping European organizations to achieve their digital sovereignty goals.

One of the first of these was Linux and cloud-native computing powerhouse Red Hat. Late last year, Red Hat became the first US company to announce its own EU-specific digital sovereignty program, Red Hat Confirmed Sovereign Support (RHCSS). This initiative guarantees critical European IT operations remain under EU control.

Now, Red Hat is backing this initiative with its open-source Digital Sovereignty Readiness Assessment toolkit. This tool is designed to give governments and enterprises a concrete way to measure how much control they actually have over their data, infrastructure, and operations in an era of geopolitical cloud anxiety.

This new web-based, self-service survey walks organizations through 21 multiple-choice questions. Areas covered include data residency, encryption key control, disaster recovery planning for geopolitical events, and the ability to prevent sensitive data from crossing borders. The goal is to move digital sovereignty from vague policy talk to a measurable "sovereignty baseline" that IT and business leaders can act on.

[...] Red Hat's framework evaluates sovereignty maturity across seven domains: data sovereignty, technical sovereignty, operational sovereignty, assurance sovereignty, open source strategy, executive oversight, and managed services. At the end of the questionnaire, organizations receive a score mapped to four stages: foundation, developing, strategic, and advanced. It also includes a roadmap of recommended next steps and research questions for stakeholders.
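The scoring rules behind the questionnaire are not public, but the mapping it describes, domain scores rolled up into one of four stages, can be illustrated with a toy scorer. The 0-3 answer scale, the equal domain weights, and the stage thresholds below are all assumptions, not Red Hat's actual methodology.

```python
# Toy maturity scorer: average normalized scores across the seven domains
# and map the result to one of the four reported stages.
DOMAINS = [
    "data", "technical", "operational", "assurance",
    "open_source", "executive_oversight", "managed_services",
]
# (upper threshold on the normalized average, stage name) -- assumed cutoffs.
STAGES = [
    (0.25, "foundation"),
    (0.50, "developing"),
    (0.75, "strategic"),
    (float("inf"), "advanced"),
]

def maturity_stage(scores: dict[str, int], max_per_domain: int = 3) -> str:
    """Normalize the total score to [0, 1] and map it to a stage."""
    total = sum(scores[d] for d in DOMAINS) / (len(DOMAINS) * max_per_domain)
    for threshold, stage in STAGES:
        if total < threshold:
            return stage

example = {d: 2 for d in DOMAINS}  # every domain answered "mostly in place"
print(maturity_stage(example))     # -> strategic
```

A real assessment would also attach per-domain recommendations, which is where the toolkit's roadmap output comes in.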

[...] Of course, Red Hat hopes you'll turn to its services to achieve your digital sovereignty goal, but there's no requirement that you do so. You decide what to do with the analysis and whether you want to join the many European governments, companies, and organizations that are waving goodbye to Amazon Web Services, Microsoft, or Google cloud services.

Mind you, all these US tech giants are also now offering their own digital sovereignty initiatives. The Digital Sovereignty Readiness Assessment toolkit can help you decide whether these US offerings meet your needs.


Original Submission